VPU TECHNOLOGY & GPGPU COMPUTING Arka Ghosh (9007900477a@gmail.com) B.Tech Computer Science & Engineering DELIVERED AT Seacom Engineering College, CSE Dept DATE 7th April 2011
What Is VPU? VPU stands for Visual Processing Unit; it is more generally known as the Graphics Processing Unit, or GPU. The GPU is a MASSIVELY PARALLEL and MASSIVELY MULTITHREADED microprocessor. Hybrid solutions: NVIDIA SLI, ATI Radeon CrossFireX. Why GPU? The GPU is used for high-performance computing. Originally the GPU's job was to offload and accelerate graphics rendering from the CPU, but nowadays the scene has changed: the GPU can work like a CPU, and in some complex computational cases it beats the CPU. GPU Solutions:- GPUs come in two forms. 1. Integrated GPU: integrated on the motherboard chipset; it has low memory bandwidth and much higher latency than dedicated ones. e.g. the NVIDIA 730a chipset provides an 8200GT GPU with a 540 MHz core. 2. Discrete or Dedicated GPU: the most powerful form of GPU; it is generally installed in a PCIe or AGP slot on the motherboard and has its own memory module. e.g. the ATI Radeon HD 5970 X2 has a compute power of 4.64 TeraFLOPS with 3200 stream processors and a 1 GHz core. © Arka Ghosh 2011
What is PPU? PPU stands for Physics Processing Unit: a processor specialized for calculating rigid body dynamics, soft body dynamics, collision detection, fluid dynamics, hair and clothing simulation, finite element analysis, and fracturing of objects. The main leader in PPUs is AGEIA PhysX. It consists of a general-purpose RISC core controlling an array of custom SIMD floating-point VLIW processors working in local banked memories, with a switch fabric to manage transfers between them. There is no cache hierarchy as in a CPU or GPU. GPUs vs PPUs:- The drive toward GPGPU is making GPUs more and more suitable for the job of a PPU. ULTIMATE FATE OF GPU:- 1. Intel's LARRABEE 2. AMD's FUSION © Arka Ghosh 2011
-:INTO THE ARCHITECTURE:- Use of SPM:- SPM, or SCRATCHPAD MEMORY, is a high-speed internal memory used for temporary storage of calculations, data, and other work in progress. In reference to a microprocessor ("CPU"), scratchpad refers to a special high-speed memory circuit used to hold small items of data for rapid retrieval. EXAMPLE:- NVIDIA's 8800 GPU running under CUDA provides 16 KiB of scratchpad per thread bundle when used for GPGPU tasks. STREAM PROCESSING: The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. 1. Uniform stream. Applications:- compute intensity, data parallelism, data locality. Conventional, sequential paradigm: for(int i = 0; i < 100 * 4; i++) result[i] = source0[i] + source1[i]; Parallel SIMD paradigm, packed registers (SWAR): for(int el = 0; el < 100; el++) // for each vector vector_sum(result[el], source0[el], source1[el]); © Arka Ghosh 2011
Graphics Pipeline: The graphics pipeline typically accepts some representation of a three-dimensional scene as input and produces a 2D raster image as output. OpenGL and Direct3D are two notable graphics pipeline models accepted as widespread industry standards. Stages of the graphics pipeline:-> 1. Transformation 2. Per-vertex lighting 3. Viewing transformation 4. Primitives generation 5. Projection transformation 6. Clipping 7. Viewport transformation 8. Scan conversion or rasterization 9. Texturing, fragment shading 10. Display. Shader: Shaders are used to program the GPU's programmable rendering pipeline, which has mostly superseded the fixed-function pipeline that allowed only common geometry-transformation and pixel-shading functions; with shaders, customized effects can be used. <<<Types Of Shader>>> Vertex shaders, Pixel shaders, Geometry shaders. USEFULNESS OF SHADER:- 1. Simplified graphics processing unit pipeline 2. Parallel processing. Programming shaders: We can program shaders using OpenGL (GLSL), Cg, and Microsoft HLSL. © Arka Ghosh 2011
GPU CLUSTER: What is a cluster? A GPU cluster is a cluster in which each node is equipped with a GPU. 1. Homogeneous 2. Heterogeneous. Components: Hardware (other):- interconnect. Software:- 1. Operating system 2. A GPU driver for each type of GPU present in each cluster node 3. A clustering API (such as the Message Passing Interface, MPI). Algorithm mapping. GPU SWITCHING: means switching from one cluster node to another. Windows switching, Linux switching. © Arka Ghosh 2011
What Is GPGPU? GPGPU stands for general-purpose graphics processing unit computing. Using the GPU as a CPU is GPGPU computing. NVIDIA CUDA:- It is a GPGPU computing architecture that provides a heterogeneous computing environment. Why GPU Computing? To achieve high-performance computing. Minimize ERROR. LOW power consumption.. GO GREEN. NVIDIA FLEXES TESLA MUSCLE
CUDA Kernels and Threads: Parallel portions of an application are executed on the device as kernels. One kernel is executed at a time. Many threads execute each kernel. Differences between CUDA and CPU threads: CUDA threads are extremely lightweight; CUDA uses 1000s of threads to achieve efficiency, while multi-core CPUs can use only a few. Definitions: Device = GPU, Host = CPU, Kernel = function that runs on the device. Data Movement Example:

int main(void)
{
    float *a_h, *b_h; // host data
    float *a_d, *b_d; // device data
    int N = 14, nBytes, i;
    nBytes = N*sizeof(float);
    a_h = (float *)malloc(nBytes);
    b_h = (float *)malloc(nBytes);
    cudaMalloc((void **) &a_d, nBytes);
    cudaMalloc((void **) &b_d, nBytes);
    for (i=0; i<N; i++) a_h[i] = 100.f + i;
    cudaMemcpy(a_d, a_h, nBytes, cudaMemcpyHostToDevice);
    cudaMemcpy(b_d, a_d, nBytes, cudaMemcpyDeviceToDevice);
    cudaMemcpy(b_h, b_d, nBytes, cudaMemcpyDeviceToHost);
    for (i=0; i<N; i++) assert( a_h[i] == b_h[i] );
    free(a_h); free(b_h); cudaFree(a_d); cudaFree(b_d);
    return 0;
}

© Arka Ghosh 2011
© Arka Ghosh 2011. 10-Series Architecture: 240 thread processors execute kernel threads. 30 multiprocessors, each containing 8 thread processors, one double-precision unit, and shared memory that enables thread cooperation.
Execution Model (software vs hardware): Threads are executed by thread processors. Thread blocks are executed on multiprocessors; thread blocks do not migrate. Several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources (shared memory and register file). A kernel is launched as a grid of thread blocks. Only one kernel can execute on a device at one time. © Arka Ghosh 2011
Tesla Architecture  © Arka Ghosh 2011
GigaThread Hardware Thread Scheduler: concurrent kernel execution + faster context switch. [Timeline figure: under serial kernel execution, Kernel 1 through Kernel 5 run strictly one after another; under parallel kernel execution, independent kernels (e.g. Kernel 2 and Kernel 3) overlap on the same timeline, finishing sooner.] © Arka Ghosh 2011
EXAMPLE:-> MATLAB code for a simple FFT, host (CPU) mode vs device mode (NVIDIA Quadro FX 5200 x2).

CPU:
clear all;
t1 = cputime;
x = rand(2^20,1);
f = fft(x);
t2 = cputime;
t3 = t2 - t1;
Here t3 = 0.4056

GPU:
clear all;
t1 = cputime;
x = rand(2^20,1);
gx = gpuArray(x);
f = fft(gx);
t2 = cputime;
t3 = t2 - t1;
Here t3 = 0.006056

MATLAB code for a simple ANN:
clear all;
t1 = cputime;
x = rand(50); y = rand(50); z = rand(50);
a = 10; b = 20; c = 30; d = 40;
f = a*(x^2) + b*(x*y) + c*(y^3) + d*(z^4);
net = feedforwardnet(800);
net = trainlm(net, x, f);
t2 = cputime;
t3 = t2 - t1;

For CPU t3 = 250.2154; for GPU t3 = 122.25. So for the ANN workload the GPU is roughly 2x (about 205%) as fast as the CPU. © Arka Ghosh 2011
CONCLUSION: C for the GPU; multi-GPU computing; massively multithreaded computing architecture; compatible with industry-standard architectures. WHERE IS GPGPU USED? MIT - for educational & scientific research purposes. Stanford University - for educational & scientific research purposes. NCSA (National Center for Supercomputing Applications). NASA. Machine learning & AI. Machine vision (mainly robot vision). Tablets. © Arka Ghosh 2011
Acknowledgement: Mriganka Chakraborty (Prof., Seacom Engineering College); Saibal Chakraborty; Dr. Nicolas Pinto, Prof. at MIT, Advanced Supercomputing Dept; T. Halfhill, NVIDIA Corp Developer Guide; GOOGLE
THANK YOU
